Search results for "k-medians clustering"

Showing 9 of 9 documents

Quantum clustering in non-spherical data distributions: Finding a suitable number of clusters

2017

Quantum Clustering (QC) provides an alternative to clustering algorithms based on geometric relationships between data points. Instead, QC makes use of quantum mechanics concepts to find structures (clusters) in data sets by finding the minima of a quantum potential. The starting point of QC is a Parzen estimator with a fixed length scale, which significantly affects the final cluster allocation. This dependence on an adjustable parameter is common to other methods. We propose a framework to find suitable values of the length parameter σ by optimising twin measures of cluster separation and consistency for a given cluster number. This is an extension of the Se…
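The quantum potential mentioned in the abstract can be written down directly from the Parzen estimator. A minimal one-dimensional sketch, assuming the standard Horn–Gottlieb formulation with Gaussian windows (the toy data and σ value are illustrative, not from the paper):

```python
import math

def quantum_potential(x, data, sigma):
    """Quantum potential V(x) of quantum clustering (Horn-Gottlieb form),
    up to an additive constant; cluster centres sit at its minima."""
    d2 = [(x - xi) ** 2 for xi in data]                 # squared distances
    w = [math.exp(-d / (2 * sigma ** 2)) for d in d2]   # Parzen weights
    return sum(di * wi for di, wi in zip(d2, w)) / (2 * sigma ** 2 * sum(w))

# two well-separated 1-D blobs: the potential is low near each blob centre
# and high in the gap between them
data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
assert quantum_potential(0.0, data, 0.5) < quantum_potential(2.5, data, 0.5)
```

Scanning such a potential for different σ values shows directly how the length scale changes the number of minima, i.e. the cluster allocation the abstract refers to.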

Keywords: Clustering high-dimensional data; Mathematical optimization; Cognitive Neuroscience; Single-linkage clustering; Correlation clustering; Computer Science Applications; Hierarchical clustering; Determining the number of clusters in a data set; Artificial Intelligence; Cluster (physics); Cluster analysis; Algorithm; k-medians clustering; Mathematics
Published in: Neurocomputing

Structural clustering of millions of molecular graphs

2014

We propose an algorithm for clustering very large molecular graph databases according to scaffolds (i.e., large structural overlaps) that are common between cluster members. Our approach first partitions the original dataset into several smaller datasets using a greedy clustering approach named APreClus, based on dynamic seed clustering. APreClus is an online, instance-incremental clustering algorithm that delays the final cluster assignment of an instance until one of the so-called pending clusters the instance belongs to has reached a significant size and is converted to a fixed cluster. Once a cluster is fixed, APreClus recalculates the cluster centers, which are used as representatives for…
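The pending/fixed cluster mechanism can be illustrated with a toy incremental clusterer. This is a one-dimensional sketch with made-up radius and size thresholds; the actual APreClus update rules are not specified in the excerpt:

```python
def incremental_cluster(stream, radius=1.0, min_size=3):
    """Toy pending/fixed clusterer: each instance joins the nearest cluster
    within `radius`; a pending cluster becomes fixed once it holds
    `min_size` members (illustrative thresholds, not APreClus itself)."""
    pending, fixed = [], []
    for x in stream:
        candidates = pending + fixed
        target = min(candidates, key=lambda c: abs(c["center"] - x), default=None)
        if target is None or abs(target["center"] - x) > radius:
            pending.append({"center": x, "members": [x]})
            continue
        target["members"].append(x)
        if target in pending:
            # pending clusters keep re-centering; fixed centers stay put
            target["center"] = sum(target["members"]) / len(target["members"])
            if len(target["members"]) >= min_size:
                pending.remove(target)
                fixed.append(target)
    return pending, fixed

pending, fixed = incremental_cluster([0.0, 0.2, 0.1, 5.0, 5.2])
assert len(fixed) == 1 and len(pending) == 1
```

The point of delaying assignment this way is that an instance's final cluster is only decided once its cluster has gathered enough evidence (members) to be trusted.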

Keywords: Clustering high-dimensional data; Fuzzy clustering; Theoretical computer science; k-medoids; Computer science; Single-linkage clustering; Correlation clustering; Constrained clustering; Complete-linkage clustering; Graph; Hierarchical clustering; Pattern recognition; Data stream clustering; CURE data clustering algorithm; Canopy clustering algorithm; FLAME clustering; Affinity propagation; Data mining; Cluster analysis; k-medians clustering; Clustering coefficient
Published in: Proceedings of the 29th Annual ACM Symposium on Applied Computing

Incrementally Assessing Cluster Tendencies with a Maximum Variance Cluster Algorithm

2003

A straightforward and efficient way to discover clustering tendencies in data using the recently proposed Maximum Variance Clustering algorithm is presented. The approach shares the benefits that the plain clustering algorithm has over other clustering approaches. Experiments on both synthetic and real data were performed to evaluate the differences between the proposed methodology and the plain use of the Maximum Variance algorithm. According to the results obtained, the proposal constitutes an efficient and accurate alternative.

Keywords: Clustering high-dimensional data; k-medoids; Computer science; CURE data clustering algorithm; Single-linkage clustering; Canopy clustering algorithm; Variance (accounting); Data mining; Cluster analysis; k-medians clustering

Distance-constrained data clustering by combined k-means algorithms and opinion dynamics filters

2014

Data clustering algorithms are mechanisms for partitioning huge arrays of multidimensional data into groups with small in-group and large out-group distances. Most existing algorithms fail when a lower bound for the distance among cluster centroids is specified, although this type of constraint can help in obtaining a better clustering. Traditional approaches also require that the desired number of clusters be specified a priori, which demands either a subjective decision or global meta-information that is not easily obtainable. In this paper, an extension of the standard data clustering problem is addressed, including additional constraints on the cluster centroid di…
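One naive way to see the problem setting is to run plain k-means and then greedily merge centroids that violate the distance lower bound. This is only a sketch of the constraint, not the opinion-dynamics filter the paper proposes; the data and thresholds are made up:

```python
def kmeans_1d(xs, k, iters=50):
    """Plain 1-D k-means with deterministic seeding (first k points)."""
    centers = list(xs[:k])
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            j = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[j].append(x)
        # keep an empty cluster's old center rather than dividing by zero
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

def enforce_min_distance(centers, d_min):
    """Greedy post-processing: merge any centroid closer than d_min to its
    left neighbour (a naive stand-in for a distance-constrained method)."""
    ordered = sorted(centers)
    merged = [ordered[0]]
    for c in ordered[1:]:
        if c - merged[-1] < d_min:
            merged[-1] = (merged[-1] + c) / 2.0   # collapse the offending pair
        else:
            merged.append(c)
    return merged

xs = [0.0, 0.1, 0.2, 1.0, 1.1, 5.0, 5.1]
centers = enforce_min_distance(kmeans_1d(xs, k=3), d_min=2.0)
assert len(centers) == 2  # the two nearby centroids collapse into one
```

Note how the number of clusters is no longer fixed at k: the distance bound itself decides how many centroids survive, which is exactly the behaviour the abstract motivates.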

Keywords: Fuzzy clustering; Correlation clustering; Single-linkage clustering; Constrained clustering; Determining the number of clusters in a data set; Settore ING-INF/04 - Automatica; Data clustering; k-means; Opinion dynamics; Hegselmann–Krause model; CURE data clustering algorithm; Data mining; Cluster analysis; Algorithm; k-medians clustering; Mathematics
Published in: 22nd Mediterranean Conference on Control and Automation

Comparison of Internal Clustering Validation Indices for Prototype-Based Clustering

2017

Clustering is an unsupervised machine learning and pattern recognition method. In general, besides revealing hidden groups of similar observations, the number of clusters needs to be determined. Internal clustering validation indices estimate this number without any external information. The purpose of this article is to evaluate, empirically, the characteristics of a representative set of internal clustering validation indices on many datasets. The prototype-based clustering framework includes multiple classical and robust statistical estimates of cluster location, so the overall setting of the paper is novel. General observations on the quality of validation indices and on t…
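A minimal example of an internal validation index is the silhouette width, which scores a partition from the data and the labels alone. Silhouette is used here as a generic stand-in; the paper's own set of indices is not listed in the excerpt:

```python
from statistics import fmean

def silhouette(points, labels):
    """Mean silhouette width of a 1-D labelling; larger means a more
    convincing partition. Assumes every cluster has at least two members."""
    def mean_dist(p, group):
        return fmean(abs(p - q) for q in group)
    total = 0.0
    for p, l in zip(points, labels):
        own = [q for q, m in zip(points, labels) if m == l and q is not p]
        a = mean_dist(p, own)                                  # cohesion
        b = min(mean_dist(p, [q for q, m in zip(points, labels) if m == o])
                for o in set(labels) - {l})                    # separation
        total += (b - a) / max(a, b)
    return total / len(points)

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
good = [0, 0, 0, 1, 1, 1]
bad = [0, 1, 0, 1, 0, 1]
assert silhouette(pts, good) > silhouette(pts, bad)
```

Evaluating such an index over candidate values of k, and taking the best-scoring one, is the "estimate this number without external information" step the abstract describes.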

Keywords: Fuzzy clustering; Computer science; Single-linkage clustering; Correlation clustering; Theoretical Computer Science; Prototype-based clustering; Clustering validation index; Robust statistics; CURE data clustering algorithm; Consensus clustering; Cluster analysis; k-medians clustering; Numerical Analysis; Pattern recognition; Determining the number of clusters in a data set; Computational Mathematics; Computational Theory and Mathematics; Artificial intelligence; Data mining; Algorithms

Statistical power of disease cluster and clustering tests for rare diseases: A simulation study of point sources

2012

Two recent epidemiological studies on clustering of childhood leukemia showed different results on the statistical power of disease cluster and clustering tests, possibly an effect of spatial data aggregation. Eight different leukemia cluster scenarios were simulated using the individual addresses of all 1,009,332 children living in Denmark in 2006. For each scenario, a number of point sources were defined with an increased risk ratio at the centroid, decreasing linearly to 1.0 at the edge; aggregation levels were the administrative units of Danish municipalities and squares of 5, 12.5 and 25 km². Six statistical methods were compared. Generally, statistical power decreased with increasing s…
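Power estimates in such simulation studies come down to counting how often a test rejects when the risk is known to be elevated. A toy Monte Carlo version, using binomial case counts and an arbitrary critical value (not one of the six tests compared in the study):

```python
import random

def simulated_power(risk_ratio, n_at_risk=1000, base_rate=0.01,
                    critical_cases=18, n_sim=200, seed=1):
    """Empirical power: fraction of simulated populations whose case count
    exceeds the critical value. All parameter values are illustrative."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        # each child develops the disease with probability base_rate * risk_ratio
        cases = sum(rng.random() < base_rate * risk_ratio for _ in range(n_at_risk))
        rejections += cases > critical_cases
    return rejections / n_sim

# power grows with the risk ratio at the point source
assert simulated_power(1.0) < simulated_power(3.0)
```

Aggregating addresses into municipalities or grid squares before counting cases blurs the point source, which is the mechanism behind the power loss the study reports.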

Keywords: Male; Female; Adolescent; Child; Child, Preschool; Humans; Epidemiology; Denmark; Health, Toxicology and Mutagenesis; Geography, Planning and Development; Disease cluster; Statistical power; Statistics; Cluster Analysis; Computer Simulation; Point (geometry); Poisson Distribution; Spatial analysis; k-medians clustering; Mathematics; Statistical hypothesis testing; Leukemia; Models, Statistical; Data Collection; Incidence; Centroid; Infectious Diseases; Data mining
Published in: Spatial and Spatio-temporal Epidemiology

Centroid-based Cluster Analysis of HVSR Data for Seismic Microzonation

2014

Horizontal to Vertical Spectral Ratio (HVSR) datasets, acquired for seismic microzonation studies in various urban centers of Sicilian towns, have been used to test cluster analysis through a non-hierarchical centroid-based algorithm. In this context, clustering techniques may be useful for identifying areas with similar seismic behaviour through HVSR data. Centroid-based algorithms generally require the number of clusters, k, and the initial centroid coordinates to be specified in advance, which is considered one of the biggest drawbacks of these algorithms. The proposed algorithm does not fix the number of clusters k in advance and chooses the initial centroids automatically from the data s…
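A leader-style seeding rule of the kind alluded to, where centroids are drawn from the data and k emerges rather than being fixed, can be sketched as follows. The separation threshold and greedy scan are illustrative assumptions, not the published procedure:

```python
def data_driven_seeds(points, min_sep):
    """Accept a point as a new centroid when it lies at least min_sep away
    from every centroid chosen so far, so the number of clusters k emerges
    from the data instead of being specified in advance."""
    seeds = []
    for p in points:
        if all(abs(p - s) >= min_sep for s in seeds):
            seeds.append(p)
    return seeds

# three well-separated groups yield three seeds, with no k given up front
assert data_driven_seeds([0.0, 0.1, 5.0, 5.2, 9.9], min_sep=2.0) == [0.0, 5.0, 9.9]
```

The seeds can then initialise any centroid-based pass, sidestepping both drawbacks the abstract lists (choosing k and choosing starting centroids by hand).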

Keywords: Regional geology; Seismic microzonation; HVSR; Cluster analysis; Centroid; Context (language use); Data set; Settore GEO/11 - Geofisica Applicata; Geomorphology; Algorithm; k-medians clustering; Geology; Parametric statistics

An algorithm for earthquakes clustering based on maximum likelihood

2007

In this paper we propose a clustering technique designed to separate and identify the two main components of seismicity: background seismicity and triggered seismicity. We suppose that a seismic catalogue is the realization of a non-homogeneous space-time Poisson clustered process, with different parametrizations for the intensity function of the Poisson-type component and of the clustered (triggered) component. The proposed method assigns each earthquake to a cluster of earthquakes, or to the set of independent events, according to the increment to the likelihood function, computed using the conditional intensity function estimated by maximum likelihood methods and iteratively chang…

Keywords: Pattern recognition; Maximum likelihood sequence estimation; Poisson distribution; Point process; CURE data clustering algorithm; ETAS model; earthquakes; point process clustering; Artificial intelligence; Settore SECS-S/01 - Statistica; clustering earthquakes; Cluster analysis; Likelihood function; Algorithm; conditional intensity function; clustering method; Realization (probability); k-medians clustering; Mathematics

SparseHC: A Memory-efficient Online Hierarchical Clustering Algorithm

2014

Computing a hierarchical clustering of objects from a pairwise distance matrix is an important algorithmic kernel in computational science. Since the storage of this matrix requires quadratic space with respect to the number of objects, the design of memory-efficient approaches is of high importance to this research area. In this paper, we address this problem by presenting a memory-efficient online hierarchical clustering algorithm called SparseHC. SparseHC scans a sorted and possibly sparse distance matrix chunk-by-chunk. Meanwhile, a dendrogram is built by merging cluster pairs as and when the distance between them is determined to be the smallest among all remaining cluster pairs. The k…
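The merge loop described here is close in spirit to single-linkage clustering over a stream of ascending distances, which needs only a union-find structure rather than the full matrix. A sketch of that general idea, not the SparseHC implementation itself:

```python
def single_linkage_from_sorted(n, sorted_edges):
    """Consume (distance, i, j) edges in ascending order, merging the two
    clusters involved whenever they are still distinct; this yields the
    single-linkage merge sequence without holding the full distance matrix."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    merges = []
    for dist, i, j in sorted_edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            merges.append((dist, i, j))
            if len(merges) == n - 1:       # dendrogram complete: stop early
                break
    return merges

edges = sorted([(0.1, 0, 1), (4.0, 1, 2), (0.2, 2, 3),
                (5.0, 0, 3), (4.5, 1, 3), (6.0, 0, 2)])
assert [d for d, _, _ in single_linkage_from_sorted(4, edges)] == [0.1, 0.2, 4.0]
```

Because only the union-find array and the current chunk of edges are resident, memory stays linear in the number of objects rather than quadratic, which is the property the abstract emphasises.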

Keywords: Sparse matrix; Clustering high-dimensional data; Theoretical computer science; Online algorithms; Computer science; Single-linkage clustering; Complete-linkage clustering; Nearest-neighbor chain algorithm; Consensus clustering; Memory-efficient clustering; Cluster analysis; k-medians clustering; General Environmental Science; Engineering::Computer science and engineering [DRNTU]; k-medoids; Dendrogram; Constrained clustering; Hierarchical clustering; Distance matrix; Canopy clustering algorithm; General Earth and Planetary Sciences; FLAME clustering; Hierarchical clustering of networks; Algorithm
Published in: Procedia Computer Science